University of Texas at San Antonio



**Open Cloud Institute**


Machine Learning/BigData EE-6973-001-Fall-2016


**Paul Rad, Ph.D.**

**Ali Miraftab, Research Fellow**



**Emotion Classification Based on EEG**


Md Musaddaqul Hasib, Dr. Yufei Huang
*Dept. of Electrical and Computer Engineering, University of Texas at San Antonio, San Antonio, Texas, USA.*
musaddaqul.hasib@gmail.com



**Dataset:** The EEG-on-music dataset was obtained directly from Dr. Yuan-Pin Lin (San Diego). It contains EEG recordings from 12 subjects. Emotion was stimulated with music: in a single-day session, each subject listened to 24 music tracks, each about 30 seconds long, followed by a survey in which the subject rated each track as either happy or sad. The recordings were made with a 14-channel Emotiv headset (T7 and T8 excluded) at a sampling rate of 128 Hz.
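The numbers above imply a natural array layout for the recordings. A minimal sketch of that layout, using random data as a stand-in (the actual file format and loading code are not specified in this proposal):

```python
import numpy as np

# Hypothetical layout implied by the dataset description:
# 12 subjects x 24 trials (music tracks) x 14 channels x (30 s * 128 Hz) samples.
N_SUBJECTS, N_TRIALS, N_CHANNELS = 12, 24, 14
FS = 128            # sampling rate in Hz
TRIAL_SECONDS = 30  # average track length

rng = np.random.default_rng(0)
eeg = rng.standard_normal((N_SUBJECTS, N_TRIALS, N_CHANNELS, TRIAL_SECONDS * FS))
# Survey ratings give one binary label per trial: 0 = sad, 1 = happy.
labels = rng.integers(0, 2, size=(N_SUBJECTS, N_TRIALS))

print(eeg.shape)     # (12, 24, 14, 3840)
print(labels.shape)  # (12, 24)
```

Keeping the subject axis first makes the subject-wise train/test split described later a simple slice along axis 0.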

**Outcome:** Prediction of emotions (happy, sad) using a deep neural network.

**Project Definition:** Differential laterality (six left-right electrode pairs) and differential caudality (four fronto-posterior electrode pairs) will be estimated from the power spectral density of five EEG bands: delta (1-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), and gamma (30-43 Hz). These ten pair differences across five bands yield a fifty-dimensional feature space. A deep neural network will then be trained on these features, with the two classes (happy and sad) defined by the survey ratings. Of the twelve subjects, 80 percent will be used for training and the remaining subjects held out to test prediction of the two classes.
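The feature-extraction pipeline above can be sketched as follows. This is a sketch under assumptions: the electrode pairings (`LATERAL_PAIRS`, `CAUDAL_PAIRS`) are placeholder channel indices, not the actual pair definitions, and Welch's method is one common choice for estimating the power spectral density; the proposal does not fix either.

```python
import numpy as np
from scipy.signal import welch

FS = 128  # sampling rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 43)}

# Hypothetical channel-index pairs: 6 left-right + 4 fronto-posterior.
# The real pairings depend on the headset's electrode montage.
LATERAL_PAIRS = [(0, 13), (1, 12), (2, 11), (3, 10), (4, 9), (5, 8)]
CAUDAL_PAIRS = [(0, 6), (2, 5), (11, 8), (13, 7)]

def band_powers(trial):
    """trial: (channels, samples) -> (channels, 5) band powers from Welch PSD."""
    freqs, psd = welch(trial, fs=FS, nperseg=2 * FS, axis=-1)
    powers = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        powers.append(psd[:, mask].sum(axis=-1))
    return np.stack(powers, axis=-1)

def asymmetry_features(trial):
    """50-D vector: (6 laterality + 4 caudality) pair differences x 5 bands."""
    bp = band_powers(trial)
    feats = [bp[a] - bp[b] for a, b in LATERAL_PAIRS + CAUDAL_PAIRS]
    return np.concatenate(feats)

# Subject-wise 80/20 split: 10 of 12 subjects train, 2 held out for testing.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((12, 24, 14, 30 * FS))  # toy stand-in for the dataset
X = np.array([[asymmetry_features(t) for t in subj] for subj in eeg])
X_train, X_test = X[:10].reshape(-1, 50), X[10:].reshape(-1, 50)
print(X_train.shape, X_test.shape)  # (240, 50) (48, 50)
```

The resulting 50-dimensional vectors would then be fed to the deep neural network classifier; the network architecture itself is left open in the proposal.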

